Knowing the pressure at all times at each node of a water distribution system (WDS) facilitates safe and efficient operation. Yet, complete measurement data cannot be collected due to the limited number of instruments in a real-life WDS. This paper presents a data-driven method that reconstructs the pressure at every node from observations of a limited number of nodes. The reconstruction method is based on K-localized spectral graph filters, which make graph convolution over the water network possible. The effects of the number of layers, the layer depth, and the order of the Chebyshev polynomials are discussed with the peculiarities of the application in mind. In addition, a weighting method is presented with which information on friction loss can be embedded in the spectral graph filters through the adjacency matrix. The performance of the proposed model is reported for different numbers of observed nodes relative to the total number of nodes. The weighted connections proved to offer no benefit over binary connections, but the proposed model reconstructs the nodal pressures with at most 5% relative error at an observation ratio of 5%. The results were achieved with a shallow graph neural network by following the considerations discussed in the paper.
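A minimal sketch of a K-localized Chebyshev spectral graph convolution, the building block this abstract describes, following the common ChebNet formulation. The water network itself, any friction-loss weighting of the adjacency matrix, and the layer sizes are assumptions supplied by the caller; this is an illustration of the technique, not the paper's exact model.

```python
import numpy as np

def cheb_conv(X, A, W):
    """K-localized spectral graph convolution (assumes K >= 2).
    X: (n_nodes, f_in) node features; A: (n, n) binary or weighted adjacency;
    W: (K, f_in, f_out) one weight matrix per Chebyshev order."""
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    # Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    L_hat = L - np.eye(n)            # rescaled Laplacian, assuming lambda_max ~ 2
    T_prev, T_curr = X, L_hat @ X    # Chebyshev recursion: T_0 = X, T_1 = L_hat X
    out = T_prev @ W[0] + T_curr @ W[1]
    for k in range(2, W.shape[0]):
        T_next = 2 * L_hat @ T_curr - T_prev
        out += T_next @ W[k]
        T_prev, T_curr = T_curr, T_next
    return out  # (n_nodes, f_out); stack a few such layers for a shallow GNN
```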
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. We offer an explanation for this phenomenon based on the concept of class-features variability collapse, which refers to the training dynamics of deep classification networks where the feature embeddings of samples belonging to the same class tend to concentrate around their class means. More specifically, we examine the few-shot error of the learned feature map, which is the classification error of the nearest class-center classifier using centers learned from a small number of random samples from each class. Assuming that the classes appearing in the data are selected independently from a distribution, we show that the few-shot error generalizes from the training data to unseen test data, and we provide an upper bound on the expected few-shot error for new classes (selected from the same distribution) using the average few-shot error for the source classes. Additionally, we show that the few-shot error on the training data can be upper bounded using the degree of class-features variability collapse. This suggests that foundation models can provide feature maps that are transferable to new downstream tasks even with limited data available.
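A minimal sketch of the nearest class-center few-shot evaluation the abstract analyzes: class centers are estimated from a few random samples per class, and queries are assigned to the closest center. The feature map `embed` is an assumption standing in for the pretrained foundation model.

```python
import numpy as np

def few_shot_error(embed, support_x, support_y, query_x, query_y):
    """support_x/query_x: raw inputs; support_y/query_y: integer class labels."""
    feats = embed(support_x)                        # (n_support, d) embeddings
    classes = np.unique(support_y)
    centers = np.stack([feats[support_y == c].mean(axis=0) for c in classes])
    q = embed(query_x)                              # (n_query, d)
    dists = ((q[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    preds = classes[dists.argmin(axis=1)]           # nearest class-center rule
    return (preds != query_y).mean()                # empirical few-shot error
```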
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful designs of the optimization dynamics are critical to learning meaningful representations. We identify that a faster paced optimization of the predictor and semi-gradient updates on the representation, are crucial to preventing the representation collapse. Then in an idealized setup, we show self-predictive learning dynamics carries out spectral decomposition on the state transition matrix, effectively capturing information of the transition dynamics. Building on the theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
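A minimal PyTorch sketch of the two design choices the abstract identifies: the predictor is optimized at a faster pace (here via more inner steps and a larger learning rate), and the representation receives only a semi-gradient, i.e. the prediction target is treated as a constant via `detach()`. Network sizes, learning rates, and step counts are illustrative assumptions.

```python
import torch

phi = torch.nn.Linear(8, 4)          # representation phi(s)
P = torch.nn.Linear(4, 4)            # latent predictor: P(phi(s)) ~ phi(s')
opt_phi = torch.optim.SGD(phi.parameters(), lr=1e-3)
opt_P = torch.optim.SGD(P.parameters(), lr=1e-2)   # faster-paced predictor

def update(s, s_next):
    """s, s_next: (batch, 8) transition pairs sampled from the environment."""
    for _ in range(5):               # inner loop: predictor optimized faster
        loss_P = (P(phi(s).detach()) - phi(s_next).detach()).pow(2).mean()
        opt_P.zero_grad(); loss_P.backward(); opt_P.step()
    # Semi-gradient on the representation: no gradient flows through the
    # target phi(s_next), which helps prevent collapse to trivial solutions.
    loss_phi = (P(phi(s)) - phi(s_next).detach()).pow(2).mean()
    opt_phi.zero_grad(); loss_phi.backward(); opt_phi.step()
```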
Property inference attacks against machine learning (ML) models aim to infer properties of the training data that are unrelated to the primary task of the model, and have so far been formulated as binary decision problems, i.e., whether or not the training data have a certain property. However, in industrial and healthcare applications, the proportion of labels in the training data is quite often also considered sensitive information. In this paper we introduce a new type of property inference attack that unlike binary decision problems in literature, aim at inferring the class label distribution of the training data from parameters of ML classifier models. We propose a method based on \emph{shadow training} and a \emph{meta-classifier} trained on the parameters of the shadow classifiers augmented with the accuracy of the classifiers on auxiliary data. We evaluate the proposed approach for ML classifiers with fully connected neural network architectures. We find that the proposed \emph{meta-classifier} attack provides a maximum relative improvement of $52\%$ over state of the art.
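A minimal sketch of the shadow-training pipeline described above: shadow classifiers are trained on datasets with known label distributions, their flattened parameters (augmented with accuracy on auxiliary data) become meta-features, and a meta-model predicts the class-label distribution. The architectures are illustrative assumptions, and a ridge regressor stands in for the paper's meta-classifier here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

def meta_features(clf, aux_x, aux_y):
    """Flattened shadow-model parameters plus accuracy on auxiliary data."""
    params = np.concatenate([w.ravel() for w in clf.coefs_ + clf.intercepts_])
    return np.append(params, clf.score(aux_x, aux_y))

def train_meta(shadow_sets, aux_x, aux_y):
    """shadow_sets: list of (X, y, label_distribution) with known distributions.
    All shadow models share one architecture so parameter vectors align."""
    feats, targets = [], []
    for X, y, dist in shadow_sets:
        shadow = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X, y)
        feats.append(meta_features(shadow, aux_x, aux_y))
        targets.append(dist)
    return Ridge().fit(np.array(feats), np.array(targets))  # the meta-model
```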
As one of the most intuitive interfaces known to humans, natural language has the potential to mediate many tasks that involve human-computer interaction, especially in application-focused fields like Music Information Retrieval. In this work, we explore cross-modal learning in an attempt to bridge audio and language in the music domain. To this end, we propose MusCALL, a framework for Music Contrastive Audio-Language Learning. Our approach consists of a dual-encoder architecture that learns the alignment between pairs of music audio and descriptive sentences, producing multimodal embeddings that can be used for text-to-audio and audio-to-text retrieval out of the box. Thanks to this property, MusCALL can be transferred to virtually any task that can be cast as text-based retrieval. Our experiments show that our method performs significantly better than the baselines at retrieving audio that matches a textual description and, conversely, text that matches an audio query. We also demonstrate that the multimodal alignment capability of our model can be successfully extended to the zero-shot transfer scenario for genre classification and auto-tagging on two public datasets.
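A minimal PyTorch sketch of the dual-encoder contrastive objective behind this kind of audio-language alignment: paired audio/caption embeddings are pulled together and mismatched pairs pushed apart with a symmetric InfoNCE loss. The two encoders are placeholders; only the alignment objective is illustrated, under the assumption that MusCALL uses a CLIP-style loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """audio_emb, text_emb: (batch, d) encoder outputs, row i forms a pair."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature                     # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)  # matches on the diagonal
    # Symmetric cross-entropy: audio-to-text and text-to-audio retrieval
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```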
In this paper, a novel solution is introduced for visual Simultaneous Localization and Mapping (VSLAM) built up from deep learning components. The proposed architecture is a highly modular framework in which each component offers state-of-the-art results in its respective field of vision-based deep learning. The paper shows that through the synergic integration of these individual building blocks, a functioning and efficient all-through-deep-neural (ATDN) VSLAM system can be created. An embedding distance loss function is introduced, and the ATDN architecture is trained with it. The resulting system achieved 4.4% translation and 0.0176 deg/m rotation error on a subset of the KITTI dataset. The proposed architecture can be used for efficient, low-latency autonomous driving (AD) assisting database creation as well as a basis for autonomous vehicle (AV) control.
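A heavily hedged sketch of an "embedding distance" style loss in the spirit described above: the network's latent embedding for a frame pair is pulled toward an embedding of the ground-truth relative pose. This illustrates the general idea only; the `pose_encoder` and the exact formulation are assumptions, not the ATDN paper's definition.

```python
import torch

def embedding_distance_loss(pred_embedding, pose_encoder, gt_rel_pose):
    """pred_embedding: (batch, d) latent from the VSLAM network;
    pose_encoder: hypothetical module mapping a ground-truth relative pose
    into the same latent space, providing the supervision target."""
    target = pose_encoder(gt_rel_pose).detach()    # embed the supervision signal
    return torch.norm(pred_embedding - target, dim=-1).mean()
```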
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is ${\tilde{\mathcal{O}}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of \emph{both regret and communication cost}. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their \emph{immediate neighbors} through a carefully designed consensus procedure.
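A minimal single-agent LinUCB sketch, the base algorithm that DisBE-LUCB distributes: each round the agent picks the arm maximizing a ridge-regression reward estimate plus an optimism bonus. The batch elimination and server communication of DisBE-LUCB are omitted; `alpha` and the regularizer are illustrative assumptions.

```python
import numpy as np

class LinUCB:
    def __init__(self, d, alpha=1.0, lam=1.0):
        self.alpha = alpha
        self.V = lam * np.eye(d)     # regularized design matrix
        self.b = np.zeros(d)         # accumulated reward-weighted features

    def choose(self, contexts):
        """contexts: (n_arms, d) feature vectors for this round's arms."""
        V_inv = np.linalg.inv(self.V)
        theta = V_inv @ self.b       # ridge estimate of the reward parameter
        bonus = np.sqrt(np.einsum('ad,de,ae->a', contexts, V_inv, contexts))
        return int((contexts @ theta + self.alpha * bonus).argmax())

    def update(self, x, reward):
        self.V += np.outer(x, x)     # rank-one update with the played context
        self.b += reward * x
```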
We study a sequential decision problem where the learner faces a sequence of $K$-armed stochastic bandit tasks. The tasks may be designed by an adversary, but the adversary is constrained to choose the optimal arm of each task from a smaller (but unknown) subset of $M$ arms. The task boundaries may be known (the bandit meta-learning setting) or unknown (the non-stationary bandit setting). We design an algorithm based on a reduction to bandit submodular maximization and show that, in the regime of many tasks and few optimal arms, its regret in both settings is smaller than the simple baseline of $\tilde{O}(\sqrt{KNT})$ that can be obtained with standard algorithms designed for non-stationary bandit problems. For the bandit meta-learning problem with fixed task length $\tau$, we show that the regret of the algorithm is bounded as $\tilde{O}(NM\sqrt{M\tau} + N^{2/3}M\tau)$. Under an additional assumption on the identifiability of the optimal arms in each task, we show a bandit meta-learning algorithm with an improved $\tilde{O}(N\sqrt{M\tau} + N^{1/2}\sqrt{MK\tau})$ regret.
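An illustrative sketch of the core idea (not the paper's reduction to bandit submodular maximization): when optimal arms come from a small unknown subset of size $M$, a meta-learner can track which arms won previous tasks and run UCB restricted to the empirically best $M$ arms, paying full $K$-armed exploration only occasionally. The exploration schedule and the `bandit.pull` interface are assumptions.

```python
import numpy as np

def ucb_on_subset(arms, tau, bandit):
    """Run UCB for tau rounds over the given arm subset; return the winner."""
    n, s = np.zeros(len(arms)), np.zeros(len(arms))
    for t in range(tau):
        ucb = np.where(n == 0, np.inf,   # pull every arm at least once
                       s / np.maximum(n, 1)
                       + np.sqrt(2 * np.log(t + 1) / np.maximum(n, 1)))
        i = int(ucb.argmax())
        n[i] += 1; s[i] += bandit.pull(arms[i])
    return arms[int((s / np.maximum(n, 1)).argmax())]

def meta_play(tasks, K, M, tau, p_explore=0.1, seed=0):
    rng = np.random.default_rng(seed)
    wins = np.zeros(K)                       # how often each arm won a task
    for bandit in tasks:
        if wins.sum() < M or rng.random() < p_explore:
            arms = np.arange(K)              # occasional full exploration
        else:
            arms = np.argsort(-wins)[:M]     # exploit the learned small subset
        wins[ucb_on_subset(arms, tau, bandit)] += 1
```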
Although there are several text processing pipelines available for Hungarian, none of them satisfies the requirements of today's NLP applications. A language processing pipeline should consist of close to state-of-the-art lemmatization, morphological analysis, entity recognition, and word embeddings. Industrial text processing applications also have to satisfy non-functional software quality requirements; what is more, frameworks supporting multiple languages are increasingly favored. This paper introduces HuSpaCy, an industry-ready Hungarian language processing pipeline. The presented tool provides components for the most important basic linguistic analysis tasks. It is open source and available under a permissive license. Our system is built on top of spaCy's NLP components, which means it is fast, has a rich ecosystem of NLP applications and extensions, and comes with extensive documentation and a well-known API. Besides an overview of the underlying models, we also present a rigorous evaluation on common benchmark datasets. Our experiments confirm that HuSpaCy achieves high accuracy on all subtasks while maintaining resource-efficient prediction capabilities.
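A short usage sketch of the spaCy-style API the paper describes. The package and download/load calls follow the HuSpaCy project's published instructions (`pip install huspacy`), but treat the exact names as assumptions in case the release has changed.

```python
import huspacy

huspacy.download()        # fetch the default Hungarian model (one-time step)
nlp = huspacy.load()      # spaCy-compatible pipeline

doc = nlp("Budapest Magyarország fővárosa.")
for token in doc:
    # lemma, part of speech, and entity type from the pipeline components
    print(token.text, token.lemma_, token.pos_, token.ent_type_)
```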
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. In this paper, we provide an explanation for this behavior based on the recently observed phenomenon that the features learned by overparameterized classification networks show an interesting clustering property, called neural collapse. We show theoretically that neural collapse generalizes to new samples from the training classes and, more importantly, to new classes as well, allowing foundation models to provide feature maps that work well in transfer learning and, specifically, in the few-shot setting.
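A minimal sketch of one common way to measure the collapse property this abstract refers to: the ratio of within-class to between-class feature variance. Values near zero indicate that embeddings of each class have concentrated around their class means. This is a standard diagnostic offered as an illustration, not the paper's exact metric.

```python
import numpy as np

def variability_collapse_ratio(features, labels):
    """features: (n, d) penultimate-layer embeddings; labels: (n,) class ids."""
    mu = features.mean(axis=0)                       # global feature mean
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    within = np.mean([((features[labels == c] - means[i]) ** 2).sum(axis=1).mean()
                      for i, c in enumerate(classes)])
    between = ((means - mu) ** 2).sum(axis=1).mean()
    return within / between                          # ~0 indicates collapse
```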